Pods are preferentially scheduled to machines that meet the current session resources #2815
Conversation
Force-pushed from a15a408 to 73b0116.
/priority important-soon
Force-pushed from 73b0116 to 2dcbdd8.
Close the current PR and fix the preemption problem through #2916.
/close
@wangyang0616: Closed this PR.
The current PR was closed by mistake; reopening it.
@wangyang0616: Reopened this PR.
Force-pushed from c841e5e to 041dfeb.
    }
}

var node *api.NodeInfo
No need to define a new node variable.
done
if bestNode != nil {
    node = bestNode
} else {
    klog.Errorf("task %s/%s allocate failed, bestNode is nil", task.Namespace, task.Name)
The code between lines 249 and 255 is redundant.
The code after line 250 uses the node information for resource checks, so we need to ensure the node pointer is not nil.
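For readers following the thread, here is a minimal, self-contained sketch of the guard being defended, using a simplified stand-in type rather than the scheduler's real api.NodeInfo (the names nodeInfo, allocateTo, and the int64 resource model are illustrative assumptions, not the actual code):

package main

import (
    "errors"
    "fmt"
)

// nodeInfo is a simplified stand-in for api.NodeInfo; only an idle-resource
// field is modeled for this sketch.
type nodeInfo struct {
    name string
    idle int64
}

// allocateTo guards against a nil best node before the later resource checks
// dereference it, which is the reason the author keeps the nil check.
func allocateTo(bestNode *nodeInfo, request int64) error {
    if bestNode == nil {
        return errors.New("allocate failed: bestNode is nil")
    }
    if bestNode.idle < request {
        return fmt.Errorf("node %s has insufficient idle resources", bestNode.name)
    }
    return nil
}

func main() {
    fmt.Println(allocateTo(nil, 2))                          // guard path: prints the error
    fmt.Println(allocateTo(&nodeInfo{name: "n1", idle: 4}, 2)) // happy path: prints <nil>
}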
}
switch {
case len(nodes) == 0:
    klog.V(3).Infof("Task: %v, no matching node is found in the nodes list."+
Log level 3 is not a good choice here; it will produce too much log output.
done
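As an illustration of the change requested above, here is a small sketch assuming the k8s.io/klog/v2 import used by the code base; the choice of V(5) is an assumption about a suitable higher verbosity level, not taken from the diff:

package allocate

import "k8s.io/klog/v2"

// logNoPredicateNodes is a sketch (not the actual patch) of the reviewer's
// suggestion: emit the "no matching node" message at a higher verbosity so it
// only appears when detailed logging is requested.
func logNoPredicateNodes(taskID string) {
    klog.V(5).Infof("Task: %v, no matching node is found in the nodes list.", taskID)
}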
@@ -195,28 +195,64 @@ func (alloc *Action) Execute(ssn *framework.Session) {
        break
    }

    var candidateNodes []*api.NodeInfo
    // When scheduling pods, gradient scoring is performed on all nodes that are successfully filtered.
// Candidate nodes are divided into two gradients:
// - the first gradient: a list of free nodes whose idle resources satisfy the task's resource request;
// - the second gradient: a list of nodes whose idle resources plus future idle resources satisfy the task's resource request.
// Score the first-gradient nodes first; if one of them meets the requirements, ignore the second-gradient list.
// Otherwise, score the second-gradient nodes and select a suitable node from them.
done
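To make the suggested comment concrete, the following is a minimal, self-contained sketch of the two-gradient split with simplified stand-in types (the real implementation works on api.NodeInfo and the scheduler's Resource comparisons; NodeInfo, Idle, FutureIdle, and splitIntoGradients here are assumptions for illustration only):

package main

import "fmt"

// NodeInfo is a stand-in for the scheduler's node abstraction; Idle models the
// currently free resource and FutureIdle models free plus releasing resource.
type NodeInfo struct {
    Name       string
    Idle       int64
    FutureIdle int64
}

// splitIntoGradients divides the filtered nodes into the two gradients described
// in the suggested comment: nodes whose current idle resources satisfy the
// request, and nodes that only satisfy it once releasing resources are counted.
func splitIntoGradients(nodes []*NodeInfo, request int64) (first, second []*NodeInfo) {
    for _, n := range nodes {
        switch {
        case n.Idle >= request:
            first = append(first, n)
        case n.FutureIdle >= request:
            second = append(second, n)
        }
    }
    return first, second
}

func main() {
    nodes := []*NodeInfo{
        {Name: "n1", Idle: 2, FutureIdle: 8},
        {Name: "n2", Idle: 8, FutureIdle: 8},
    }
    first, second := splitIntoGradients(nodes, 4)
    fmt.Println("first gradient:", len(first), "second gradient:", len(second))
}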
Force-pushed from 041dfeb to 0bfd8d1.
…ources, and then consider machines that are satisfied with future resources
Signed-off-by: wangyang <wangyang8126@gmail.com>
Force-pushed from 0bfd8d1 to 5e48157.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: william-wang. The full list of commands accepted by this bot can be found here. The pull request process is described here.
fix: #2782
Besides the binpack plugin, I understand that other algorithm plugins may run into similar problems, such as task-topology, nodeorder, etc.
I was wondering whether this generic problem could be solved as follows:
When the allocate action scores nodes, it divides them into two groups: the first group contains machines whose idle resources meet the task's resource request, and the second group contains machines whose future idle resources meet the task's resource request.
First, score the first group of machines; if a suitable machine is found, schedule the task onto it. If the first group has no machine that meets the resource request, then score the second group and select a suitable node from it.
In this way, a pod is first dispatched to a machine that meets its resource requirements in the current session, so it will not stay pending for a long time. If no machine in the current session meets the requirements, the pod can still be scheduled to wait on a machine whose future idle resources meet them.
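A minimal sketch of the two-phase selection described above, again with simplified stand-in types rather than the scheduler's real node and scoring plugin APIs (node, bestOf, and selectNode are hypothetical names used only for this illustration):

package main

import "fmt"

// node is a simplified stand-in for a schedulable node.
type node struct {
    name string
    idle int64
}

// bestOf scores a group of nodes and returns the highest-scoring one,
// or nil when the group is empty.
func bestOf(group []*node, score func(*node) int64) *node {
    var best *node
    var bestScore int64
    for _, n := range group {
        if s := score(n); best == nil || s > bestScore {
            best, bestScore = n, s
        }
    }
    return best
}

// selectNode implements the proposed fallback: prefer nodes whose current idle
// resources satisfy the request, and only consider the "future idle" group
// when the first group yields no candidate.
func selectNode(currentIdle, futureIdle []*node, score func(*node) int64) *node {
    if best := bestOf(currentIdle, score); best != nil {
        return best
    }
    return bestOf(futureIdle, score)
}

func main() {
    score := func(n *node) int64 { return n.idle } // toy scoring function
    picked := selectNode(nil, []*node{{name: "n3", idle: 1}}, score)
    fmt.Println("picked:", picked.name)
}

The key point of the design is that the second group is consulted only when the first group produces no candidate, which keeps pods on currently available machines whenever possible.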